Incremental Learning from Positive Data

Authors

  • Steffen Lange
  • Thomas Zeugmann
Abstract

The present paper deals with a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation as well as its previously made hypothesis at a time, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the concept to be learned. This basic scenario is referred to as iterative learning. Iterative inference can be refined by allowing the learner to store an a priori bounded number of carefully chosen examples, resulting in bounded example memory inference (cf. Fulk, Jain and Osherson [7]). Additionally, feedback identification is introduced. Now, the learner is enabled to ask whether or not a particular element did already appear in the data provided so far. Our results are threefold. First, the learning capabilities of the various models of incremental learning are related to previously studied learning models. It is proved that incremental learning can always be simulated by inference devices that are both set-driven and conservative. Second, feedback learning is shown to be more powerful than iterative inference, and its learning power is incomparable to that of bounded example memory inference, which itself extends that of iterative learning, too. In particular, the learning power of bounded example memory inference always increases if the number of examples the learner is allowed to store is incremented. Third, a sufficient condition for iterative inference allowing non-enumerative learning is provided. The results obtained provide strong evidence that there is no unique way to design superior incremental learning algorithms. Instead, incremental learning is the art of knowing what to overlook.
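The iterative-learning protocol described above can be sketched as a simple loop: the learner receives one element of the positive presentation together with its previous hypothesis, and emits a new hypothesis. The following minimal Python sketch is purely illustrative; the names `Hypothesis`, `learner`, and `iterative_learn` are assumptions, and the toy learner shown (which hypothesizes exactly the set of elements seen so far) is not the paper's construction.

```python
from typing import Iterable, Optional

# A hypothesis here is just a finite set of elements (illustrative only).
Hypothesis = frozenset


def learner(prev: Optional[Hypothesis], example: int) -> Hypothesis:
    """A trivially conservative iterative learner: it changes its
    hypothesis only when the new example is not already covered."""
    if prev is None:
        return Hypothesis([example])
    if example in prev:
        return prev  # conservative: no mind change on covered data
    return prev | {example}


def iterative_learn(presentation: Iterable[int]) -> Hypothesis:
    """Run the learner over a finite prefix of a positive presentation,
    feeding it one element plus its previous hypothesis at a time."""
    hyp: Optional[Hypothesis] = None
    for x in presentation:
        hyp = learner(hyp, x)
    return hyp


# On the presentation 2, 4, 2, 6 the hypotheses stabilize at {2, 4, 6}.
print(iterative_learn([2, 4, 2, 6]))
```

Note that the learner's state is exactly its current hypothesis; bounded example memory inference would additionally let it carry a fixed-size buffer of chosen past examples, and feedback identification would let it query whether an element already appeared.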


Similar articles

On the strength of incremental learning

This paper provides a systematic study of incremental learning from noise-free and from noisy data, thereby distinguishing between learning from only positive data and from both positive and negative data. Our study relies on the notion of noisy data introduced in [22]. The basic scenario, named iterative learning, is as follows. In every learning stage, an algorithmic learner takes as input one...


On the power of incremental learning

This paper provides a systematic study of incremental learning from noise-free and from noisy data. As usual, we distinguish between learning from positive data and learning from positive and negative data, synonymously called learning from text and learning from informant. Our study relies on the notion of noisy data introduced by Stephan. The basic scenario, named iterative learning, is as fo...


Some natural conditions on incremental learning

The present study aims at insights into the nature of incremental learning in the context of Gold’s model of identification in the limit. With a focus on natural requirements such as consistency and conservativeness, incremental learning is analysed both for learning from positive examples and for learning from positive and negative examples. The results obtained illustrate in which way differe...


Towards a Better Understanding of Incremental Learning

The present study aims at insights into the nature of incremental learning in the context of Gold’s model of identification in the limit. With a focus on natural requirements such as consistency and conservativeness, incremental learning is analysed both for learning from positive examples and for learning from positive and negative examples. The results obtained illustrate in which way differe...


Modeling Incremental Learning from Positive Data

The present paper deals with a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation as well as its previously made hypothesis at a time, and outputs a new hyp...



Journal:
  • J. Comput. Syst. Sci.

Volume 53, Issue 

Pages  -

Published 1996